
    The Determinants of Management Expenses

    This paper develops a model that explains the determinants of the management expenses charged by U.S. equity funds. The study shows that for high-quality managers, an increase in quality is associated with higher fees. In contrast, as the quality of lower-quality managers deteriorates, their fees increase. A non-linear negative relationship is found between the size of a fund and its management expenses. Economies of scope are also shown to exist between the number of funds within a mutual fund complex and the management expenses charged to investors. Finally, while 12b-1 fees have been thought of as a substitute for load charges, this paper suggests that they are complements.

    Preconditioning Kernel Matrices

    The computational and storage complexity of kernel machines presents the primary barrier to their scaling to large, modern, datasets. A common way to tackle the scalability issue is to use the conjugate gradient algorithm, which relieves the constraints on both storage (the kernel matrix need not be stored) and computation (both stochastic gradients and parallelization can be used). Even so, conjugate gradient is not without its own issues: the conditioning of kernel matrices is often such that conjugate gradients will have poor convergence in practice. Preconditioning is a common approach to alleviating this issue. Here we propose preconditioned conjugate gradients for kernel machines, and develop a broad range of preconditioners particularly useful for kernel matrices. We describe a scalable approach to both solving kernel machines and learning their hyperparameters. We show this approach is exact in the limit of iterations and outperforms state-of-the-art approximations for a given computational budget.
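    As a rough illustration of the general approach (not the specific preconditioners developed in the paper), the sketch below solves a kernel-ridge/GP system (K + sigma^2 I) alpha = y with preconditioned conjugate gradients, using a Nystrom-style low-rank preconditioner applied through the Woodbury identity; the data, kernel, and number of inducing points are arbitrary illustrative choices.

```python
# Illustrative sketch only: preconditioned CG for (K + sigma^2 I) alpha = y
# with a Nystrom-style preconditioner (not necessarily the paper's preconditioners).
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def rbf_kernel(X, Z, lengthscale=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
n, m, sigma2 = 1000, 100, 0.1                  # training size, inducing points, noise variance
X = rng.standard_normal((n, 3))                # toy inputs
y = np.sin(X[:, 0]) + np.sqrt(sigma2) * rng.standard_normal(n)

K = rbf_kernel(X, X)
A = K + sigma2 * np.eye(n)                     # SPD system matrix

# Nystrom preconditioner: K is approximated by B.T @ B with B = L^{-1} K_mn,
# and (B.T @ B + sigma2 * I)^{-1} is applied cheaply via the Woodbury identity.
idx = rng.choice(n, m, replace=False)
K_mm = K[np.ix_(idx, idx)] + 1e-6 * np.eye(m)  # jitter for numerical stability
B = np.linalg.solve(np.linalg.cholesky(K_mm), K[idx, :])
inner = np.linalg.cholesky(sigma2 * np.eye(m) + B @ B.T)

def apply_preconditioner(v):
    # (B.T @ B + sigma2 * I)^{-1} v computed with two small m x m triangular solves
    w = np.linalg.solve(inner.T, np.linalg.solve(inner, B @ v))
    return (v - B.T @ w) / sigma2

alpha, info = cg(A, y, M=LinearOperator(A.shape, matvec=apply_preconditioner), maxiter=200)
print("converged" if info == 0 else f"cg returned info={info}")
```

    Applying this preconditioner costs roughly O(nm + m^2) per CG iteration on top of the O(n^2) kernel matrix-vector product, which is the kind of trade-off that makes preconditioning attractive at scale.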

    KamLAND Results

    The LMA solution of the solar neutrino problem has been explored with the 1,000-ton liquid scintillator detector KamLAND, which utilizes nuclear power reactors distributed at an effective distance of about 180 km from the experimental site. Comparing the observed neutrino rate with calculations based on reactor operation histories, evidence for reactor neutrino disappearance has been obtained from 162 ton-year exposure data. This deficit is compatible only with the LMA solution; the other solutions in the two-neutrino oscillation hypothesis are excluded at the 99.95% confidence level. Comment: 8 pages, 5 figures, proceedings of the Moriond Conference "Electroweak Interactions and Unified Theories"
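    For context, the two-neutrino oscillation analysis referred to above rests on the standard survival probability (a textbook expression, not specific to this proceeding), with the mixing angle and mass-squared splitting as the fitted parameters:

```latex
% Standard two-flavour survival probability for reactor neutrinos
% (L in m, E in MeV, \Delta m^2 in eV^2); for KamLAND, L is an effective ~180 km.
P(\bar{\nu}_e \to \bar{\nu}_e)
  = 1 - \sin^2 2\theta \,
        \sin^2\!\left( \frac{1.27\, \Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]} \right)
```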

    Loop-corrected belief propagation for lattice spin models

    Belief propagation (BP) is a message-passing method for solving probabilistic graphical models. It is very successful in treating disordered models (such as spin glasses) on random graphs. Finite-dimensional lattice models, on the other hand, contain an abundance of short loops, and the BP method is still far from satisfactory in treating the complicated loop-induced correlations in these systems. Here we propose a loop-corrected BP method to take into account the effect of short loops in lattice spin models. We demonstrate, through an application to the square-lattice Ising model, that loop-corrected BP improves significantly over the naive BP method. We also implement loop-corrected BP at the coarse-grained region-graph level to further boost its performance. Comment: 11 pages, minor changes with new references added. Final version as published in EPJ
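    For orientation, the sketch below runs plain, uncorrected BP (the Bethe approximation) on a periodic square-lattice ferromagnetic Ising model; it is only the naive baseline that the paper's loop corrections are meant to improve, and the lattice size, coupling, and temperature are arbitrary illustrative choices.

```python
# Naive BP (Bethe approximation) on an L x L periodic square-lattice Ising model.
# The loop corrections proposed in the paper are NOT implemented here.
import numpy as np

L, J, h, beta = 16, 1.0, 0.0, 0.5   # lattice size, coupling, external field, inverse temperature

def neighbors(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

sites = [(x, y) for x in range(L) for y in range(L)]
# u[(i, j)]: cavity field (message) sent from site i to its neighbor j.
u = {(i, j): 0.01 for i in sites for j in neighbors(*i)}

for sweep in range(1000):
    max_change = 0.0
    for i in sites:
        for j in neighbors(*i):
            # Cavity sum: incoming messages to i from all neighbors except j.
            cavity = h + sum(u[(k, i)] for k in neighbors(*i) if k != j)
            new = np.arctanh(np.tanh(beta * J) * np.tanh(beta * cavity)) / beta
            max_change = max(max_change, abs(new - u[(i, j)]))
            u[(i, j)] = new
    if max_change < 1e-12:
        break

# Local magnetizations from the full set of incoming messages, averaged over the lattice.
magnetization = np.mean([np.tanh(beta * (h + sum(u[(k, i)] for k in neighbors(*i))))
                         for i in sites])
print(f"Bethe/BP magnetization at beta = {beta}: {magnetization:.4f}")
```

    The gap this baseline leaves is visible already at the critical point: the Bethe approximation puts it at tanh(beta_c J) = 1/3 (beta_c ~ 0.347), whereas the exact square-lattice value is beta_c = ln(1 + sqrt(2))/2 ~ 0.441, a discrepancy driven precisely by the short loops the paper's corrections target.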

    Data-driven Distributionally Robust Optimization Using the Wasserstein Metric: Performance Guarantees and Tractable Reformulations

    We consider stochastic programs where the distribution of the uncertain parameters is only observable through a finite training dataset. Using the Wasserstein metric, we construct a ball in the space of (multivariate and non-discrete) probability distributions centered at the uniform distribution on the training samples, and we seek decisions that perform best in view of the worst-case distribution within this Wasserstein ball. The state-of-the-art methods for solving the resulting distributionally robust optimization problems rely on global optimization techniques, which quickly become computationally excruciating. In this paper we demonstrate that, under mild assumptions, the distributionally robust optimization problems over Wasserstein balls can in fact be reformulated as finite convex programs, in many interesting cases even as tractable linear programs. Leveraging recent measure concentration results, we also show that their solutions enjoy powerful finite-sample performance guarantees. Our theoretical results are exemplified in mean-risk portfolio optimization as well as uncertainty quantification. Comment: 42 pages, 10 figures
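    As a hedged illustration of the kind of finite convex reformulation the abstract describes (the piecewise-affine loss, the l1 ground metric, the unconstrained support, and all data below are assumptions made for this example, not the paper's setting), the worst-case expectation of max_k (a_k^T xi + b_k) over a 1-Wasserstein ball around the empirical distribution can be computed with a small linear program; with unconstrained support it matches the closed form "empirical mean plus radius times the loss's Lipschitz constant".

```python
# Hedged sketch: worst-case expectation of a piecewise-affine loss over a
# 1-Wasserstein ball (l1 ground metric, support = R^m) as a small LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, m, K = 50, 3, 4                      # samples, dimension of xi, affine pieces
xi_hat = rng.standard_normal((N, m))    # empirical training samples
A = rng.standard_normal((K, m))         # loss pieces: l(xi) = max_k a_k^T xi + b_k
b = rng.standard_normal(K)
eps = 0.1                               # Wasserstein radius

piece_vals = xi_hat @ A.T + b           # N x K matrix of a_k^T xi_i + b_k

# Variables x = [lambda, s_1, ..., s_N]; minimize eps*lambda + (1/N) sum_i s_i
c = np.concatenate(([eps], np.full(N, 1.0 / N)))

rows, rhs = [], []
for i in range(N):                      # a_k^T xi_i + b_k <= s_i for all i, k
    for k in range(K):
        row = np.zeros(N + 1)
        row[1 + i] = -1.0
        rows.append(row)
        rhs.append(-piece_vals[i, k])
for k in range(K):                      # dual-norm bound ||a_k||_inf <= lambda
    row = np.zeros(N + 1)
    row[0] = -1.0
    rows.append(row)
    rhs.append(-np.max(np.abs(A[k])))

res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
              bounds=[(0, None)] + [(None, None)] * N)

# Closed form under these assumptions: empirical mean of the loss + eps * Lipschitz constant.
closed_form = piece_vals.max(axis=1).mean() + eps * np.max(np.abs(A))
print(f"LP value {res.fun:.6f}  vs  closed form {closed_form:.6f}")
```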